The COVID-19 pandemic is an unprecedented event in human history and, for most of us, the most monumental global event we have witnessed. It has impacted every aspect of daily life, from mental health to how we socialise to the prices (and thefts) of puppies skyrocketing over the past year. There are several ways to investigate the effect of COVID-19 on human behaviour; the one we have chosen is to look at how this era has impacted different industries. Through stock data, we can see the human response to the pandemic as people put their money where they think there is value. Some companies sank, some swam, and some built a Noah's ark and cruised. This analysis highlights which companies, and ultimately which industries, did just that.
What companies or industries have benefited from the pandemic? What new businesses have been created in response to the pandemic?
The approach taken was to obtain stock price data for over 4,000 companies from the Robinhood trading app, covering October 2019 to the present day. The mean price of each stock in October 2019 was compared to its mean price in October 2020. October 2019 falls shortly before the pandemic, so it captures each company's or industry's publicly perceived value before widespread panic broke out, while by October 2020 the initial frenzy surrounding the pandemic had died down, so the public's perceived value of a company better reflected what investors truly thought of its worth in a pandemic crisis.
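The core comparison can be sketched with toy numbers (hypothetical prices, not values from the real dataset):

```python
import pandas as pd

# Hypothetical daily closing prices for one ticker (illustrative only)
oct_2019 = pd.Series([50.0, 52.0, 51.0])   # October 2019 sample
oct_2020 = pd.Series([75.0, 77.0, 76.0])   # October 2020 sample

# Mean price in each month, then relative growth between the two means
mean_2019 = oct_2019.mean()   # 51.0
mean_2020 = oct_2020.mean()   # 76.0
growth = (mean_2020 - mean_2019) / mean_2019
print(round(growth, 4))  # 0.4902, i.e. roughly +49% year on year
```

The same (mean Oct 2020 − mean Oct 2019) / mean Oct 2019 ratio is computed for every stock later in the notebook.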
Robinhood was chosen for its vast user base of over 15 million people and its wide array of stocks: users have access to over 5,000 tickers. This helps limit selection bias, as the Robinhood user base is fairly representative of the American public because very little capital is required to begin investing.
import pandas as pd
import numpy as np
%matplotlib inline
# libraries for graphing
import matplotlib.pyplot as plt
import plotly.express as px
import plotly.graph_objects as go
# libraries for web scraping
from bs4 import BeautifulSoup
The data was obtained using Python code, following https://robin-stocks.readthedocs.io/en/latest/, to access stock price data through the Robinhood API. For each stock on Robinhood, the data details the stock's industry, ticker, name, and price; these were the relevant fields used in the analysis.
Firstly, to investigate which industries have prospered during the COVID-19 pandemic, we will look at UK-government-supplied data on births and deaths of businesses in the time frame from 2017 to 2020.
Note: the data is going to be scaled (each column divided by its standard deviation) to make the series comparable across industries.
Note: you can toggle which variables are visible on the graph by clicking them in the legend.
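A minimal sketch of this scaling on toy numbers (using np.std's default population standard deviation, as in the code in this notebook):

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the births/deaths data (illustrative only)
toy = pd.DataFrame({'A': [2.0, 4.0, 6.0], 'B': [10.0, 20.0, 30.0]})

# Divide each column by its standard deviation
scaled = toy / toy.apply(np.std)

# After scaling, every column has unit standard deviation,
# so series with very different magnitudes become comparable
print(np.isclose(np.std(scaled['A']), 1.0))  # True
```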
births_business = pd.read_csv('births_businesses.csv').dropna().set_index('Date')
births_business
# scale each column by its standard deviation
for i in births_business.columns:
    dev = np.std(births_business[i])
    births_business[i] = births_business[i] / dev
# plot births of businesses per industry
fig = px.line(births_business, x=births_business.index, y=births_business.columns,
              title='Business Births per Industry')
fig.update_xaxes(rangeslider_visible=True)
fig.update_layout(margin={"r": 0, "t": 50, "l": 0, "b": 30})
fig.show()
COMMENTS
The biggest increase in business births, comparing the periods before and after Q2 2020 (when lockdown laws began to ease after the first wave), is visible in the following industries:
while the biggest decrease in business births is visible in the following industries:
deaths_business = pd.read_csv('deaths_businesses.csv').dropna().set_index('Date')
deaths_business
# scale each column by its standard deviation
for i in deaths_business.columns:
    dev = np.std(deaths_business[i])
    deaths_business[i] = deaths_business[i] / dev
# plot deaths of businesses per industry
fig = px.line(deaths_business, x=deaths_business.index, y=deaths_business.columns,
              title='Business Deaths per Industry')
fig.update_xaxes(rangeslider_visible=True)
fig.update_layout(margin={"r": 0, "t": 50, "l": 0, "b": 30})
fig.show()
COMMENTS
The biggest increase in business deaths, comparing the periods before and after Q2 2020 (when lockdown laws began to ease after the first wave), is visible in the following industries:
while the biggest decrease in business deaths is visible in the following industries:
import pandas as pd
import numpy as np
%matplotlib inline
# libraries for graphing
import matplotlib.pyplot as plt
import plotly.express as px
import plotly.graph_objects as go
# libraries for web scraping
from bs4 import BeautifulSoup
import xarray as xr
import pickle as pkl
_prices = pkl.load(open('data_xarray_prices.pkl', 'rb'))
stock_to_ticker = pkl.load(open('data_stock_number_to_ticker.pkl', 'rb'))
info = pkl.load(open('data_pandas_stock_info.pkl', 'rb'))
prices = _prices.data
times = _prices.dttm.data
df = _prices.to_pandas()
df.columns=stock_to_ticker
df = df[list((set(info.index).intersection(set(df.columns))))]
df.head(3)
[Output: the first three rows of daily closing prices, indexed by dttm starting 2013-01-02 05:00:00, with one column per ticker (NEWR, ATOS, RMBI, ..., CALX, EAF) and NaN where a stock was not yet listed.]
3 rows × 4264 columns
info.head(3)
| market_cap | sector | industry | id | simple_name | name | bloomberg_unique | country | list_date | min_tick_size | fractional_tradability | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| symbol | |||||||||||
| MDB | 22518768705.573021 | Technology Services | Packaged Software | 11df6cea-5aa8-4f70-b13c-1b0321f93f7e | MongoDB | MongoDB, Inc. Class A Common Stock | EQ0000000020126743 | US | 2017-10-19 | None | tradable |
| LL | 852223745.697067 | Retail Trade | Specialty Stores | 77d814e6-a2fe-4c80-8b06-9a1345401e2f | Lumber Liquidators | Lumber Liquidators Holdings, Inc. | EQ0000000003466040 | US | 2007-11-09 | None | tradable |
| CLII | 515103444.977506 | Finance | Financial Conglomerates | 929852a2-f7aa-49e4-b1f1-b1bc20336fa5 | Climate Change Crisis Real Impact | Climate Change Crisis Real Impact I Acquisitio... | EQ0000000087786167 | US | 2020-11-20 | None | tradable |
# Create Masks for taking values for just Oct 2019 and Oct 2020
start_2019 = np.datetime64('2019-10-01')
end_2019 = np.datetime64('2019-11-01')
start_2020 = np.datetime64('2020-10-01')
end_2020 = np.datetime64('2020-11-01')
mask_oct_2019 = (df.index > start_2019)&(df.index < end_2019)
mask_oct_2020 = (df.index > start_2020)&(df.index < end_2020)
# Mean prices for Oct 2019
# Drop cheap stocks
mean_oct_2019 = df[mask_oct_2019].mean(axis = 0)
mask_cheap_2019 = mean_oct_2019>3
mean_oct_2019 = mean_oct_2019[mask_cheap_2019]
mean_oct_2019
NEWR 61.481304
RMBI 13.797391
COG 18.029130
AWR 92.631304
CTAS 266.936957
...
SYY 78.842174
PRI 123.532174
TRN 18.053913
CALX 6.907826
EAF 11.952174
Length: 3183, dtype: float64
# Mean prices for Oct 2020
mean_oct_2020 = df[mask_oct_2020].mean(axis=0)
mean_oct_2020
NEWR 61.951818
ATOS 2.038182
RMBI 10.989545
COG 18.712727
AWR 77.017273
...
SYY 63.417273
PRI 114.987273
TRN 20.226364
CALX 21.800000
EAF 7.111364
Length: 4264, dtype: float64
# Growth from Oct 2019 to Oct 2020, relative to the Oct 2019 mean
growth = (mean_oct_2020 - mean_oct_2019) / mean_oct_2019
growth = growth[mask_cheap_2019]
growth = growth.dropna()
sorted_growth = growth.sort_values(ascending=False)
def obtain_sector(company):
    return info[info.index == company]['sector'].values[0]

def obtain_name(company):
    # keep only the part of the legal name before the first comma
    name = info[info.index == company]['name'].values[0]
    name = name.split(',')[0]
    return name
growth_top_50 = sorted_growth[0:50].to_frame()
# Add sector and name columns to the new dataframe
growth_top_50['sector'] = growth_top_50.index.values
growth_top_50['sector'] = growth_top_50['sector'].apply(lambda x: obtain_sector(x))
growth_top_50['name'] = growth_top_50.index.values
growth_top_50['name'] = growth_top_50['name'].apply(lambda x: obtain_name(x))
growth_top_50.columns = ['Growth', 'Sector', 'Name']
growth_top_50 = growth_top_50.set_index('Name')
growth_top_50.head(10)
| Growth | Sector | |
|---|---|---|
| Name | ||
| Novavax | 20.830793 | Health Technology |
| Seres Therapeutics | 8.159857 | Health Technology |
| BioXcel Therapeutics | 7.639204 | Health Technology |
| Tesla | 7.019558 | Consumer Durables |
| Fiverr International Ltd. | 6.537670 | Technology Services |
| Zoom Video Communications | 6.292386 | Technology Services |
| ChemoCentryx | 6.237719 | Health Technology |
| Overstock.com | 5.989517 | Retail Trade |
| Celsius Holdings | 5.553347 | Consumer Non-Durables |
| Workhorse Group | 5.424402 | Producer Manufacturing |
import seaborn as sns
sectors = growth_top_50['Sector'].unique()
fig, ax = plt.subplots(figsize=(12, 14))
# one colour per sector, drawn from the tab20 palette
color_list = sns.color_palette("tab20", as_cmap=True).colors[0:len(sectors)]
color_dict = {sector: color for sector, color in zip(list(sectors), color_list)}
growth_top_50['Growth'].plot(kind='barh',
                             color=list(growth_top_50.Sector.map(color_dict)),
                             alpha=0.8)
ax.set_xlabel('Growth')
ax.set_title('Growth of Stock Prices by Company and Industry')
labels = list(color_dict.keys())
handles = [plt.Rectangle((0, 0), 1, 1, color=color_dict[label]) for label in labels]
plt.legend(handles, labels)
plt.gca().invert_yaxis()
fig.show()
growth_bottom_50 = sorted_growth.dropna()[-50:].to_frame()
# Add sector and name columns to the new dataframe
growth_bottom_50['sector'] = growth_bottom_50.index.values
growth_bottom_50['sector'] = growth_bottom_50['sector'].apply(lambda x: obtain_sector(x))
growth_bottom_50['name'] = growth_bottom_50.index.values
growth_bottom_50['name'] = growth_bottom_50['name'].apply(lambda x: obtain_name(x))
growth_bottom_50.columns = ['Growth', 'Sector', 'Name']
growth_bottom_50 = growth_bottom_50.set_index('Name')
growth_bottom_50.head()
| Growth | Sector | |
|---|---|---|
| Name | ||
| Laredo Petroleum | -0.801348 | Energy Minerals |
| Kosmos Energy Ltd. | -0.802559 | Energy Minerals |
| SM Energy Company | -0.807793 | Energy Minerals |
| Liberty TripAdvisor Holdings | -0.808302 | Consumer Services |
| PBF ENERGY INC. | -0.808894 | Energy Minerals |
import seaborn as sns
sectors = growth_bottom_50['Sector'].unique()
fig, ax = plt.subplots(figsize=(12, 14))
# one colour per sector, drawn from the tab20 palette
color_list = sns.color_palette("tab20", as_cmap=True).colors[0:len(sectors)]
color_dict = {sector: color for sector, color in zip(list(sectors), color_list)}
growth_bottom_50['Growth'].plot(kind='barh',
                                color=list(growth_bottom_50.Sector.map(color_dict)),
                                alpha=0.8)
ax.set_xlabel('Growth')
ax.set_title('Growth of Stock Prices by Company and Industry')
labels = list(color_dict.keys())
handles = [plt.Rectangle((0, 0), 1, 1, color=color_dict[label]) for label in labels]
plt.legend(handles, labels)
fig.show()
Here we analyse the growth obtained above, aggregated over different sectors.
growth_sector = sorted_growth.to_frame()
# Add sector and name columns to the new dataframe
growth_sector['sector'] = growth_sector.index.values
growth_sector['sector'] = growth_sector['sector'].apply(lambda x: obtain_sector(x))
growth_sector['name'] = growth_sector.index.values
growth_sector['name'] = growth_sector['name'].apply(lambda x: obtain_name(x))
growth_sector.columns = ['Growth', 'Sector', 'Name']
growth_sector = growth_sector.set_index('Name')
growth_sector.head()
| Growth | Sector | |
|---|---|---|
| Name | ||
| Novavax | 20.830793 | Health Technology |
| Seres Therapeutics | 8.159857 | Health Technology |
| BioXcel Therapeutics | 7.639204 | Health Technology |
| Tesla | 7.019558 | Consumer Durables |
| Fiverr International Ltd. | 6.537670 | Technology Services |
all_sectors = growth_sector.Sector.unique()
avg_growth_by_sector = {}
for sector in all_sectors:
    avg_growth_by_sector[sector] = growth_sector[growth_sector.Sector == sector].Growth.mean()
avg_growth_by_sector = pd.Series(avg_growth_by_sector)
avg_growth_by_sector = avg_growth_by_sector.sort_values(ascending=False)
# centre the averages so the plot shows growth relative to the market-wide mean
avg_growth_by_sector = avg_growth_by_sector - np.mean(avg_growth_by_sector)
avg_growth_by_sector = avg_growth_by_sector.dropna()
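The per-sector averaging above can equivalently be written with pandas groupby, which gives the same per-sector means in one line (a sketch on toy data; the real frame is the growth_sector built earlier):

```python
import pandas as pd

# Toy stand-in for growth_sector (illustrative sectors and growth figures)
toy = pd.DataFrame({
    'Sector': ['Health Technology', 'Health Technology', 'Energy Minerals'],
    'Growth': [0.5, 0.25, -0.8],
})

# groupby replaces the explicit loop over unique sectors
avg = toy.groupby('Sector')['Growth'].mean()
print(avg['Health Technology'])  # 0.375
print(avg['Energy Minerals'])    # -0.8
```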
fig, ax = plt.subplots(figsize=(12, 14))
avg_growth_by_sector.plot(kind='barh', alpha = 0.7)
ax.set_xlabel('Relative Growth')
ax.set_title( 'Winning and Losing Sectors')
plt.gca().invert_yaxis()
fig.show()
growth_industry = sorted_growth.to_frame()
# Add industry and name columns to the new dataframe
growth_industry['industry'] = growth_industry.index.values
# look up the 'industry' field from info (not 'sector')
growth_industry['industry'] = growth_industry['industry'].apply(
    lambda x: info[info.index == x]['industry'].values[0])
growth_industry['name'] = growth_industry.index.values
growth_industry['name'] = growth_industry['name'].apply(lambda x: obtain_name(x))
growth_industry.columns = ['Growth', 'Industry', 'Name']
growth_industry = growth_industry.set_index('Name')
growth_industry.head()
| Growth | Industry | |
|---|---|---|
| Name | ||
| Novavax | 20.830793 | Health Technology |
| Seres Therapeutics | 8.159857 | Health Technology |
| BioXcel Therapeutics | 7.639204 | Health Technology |
| Tesla | 7.019558 | Consumer Durables |
| Fiverr International Ltd. | 6.537670 | Technology Services |
all_industries = growth_industry.Industry.unique()
avg_growth_by_industry = {}
for industry in all_industries:
    avg_growth_by_industry[industry] = growth_industry[growth_industry.Industry == industry].Growth.mean()
avg_growth_by_industry = pd.Series(avg_growth_by_industry)
avg_growth_by_industry = avg_growth_by_industry.sort_values(ascending=False)
# centre the averages so the plot shows growth relative to the market-wide mean
avg_growth_by_industry = avg_growth_by_industry - np.mean(avg_growth_by_industry)
avg_growth_by_industry = avg_growth_by_industry.dropna()
fig, ax = plt.subplots(figsize=(10, 12))
avg_growth_by_industry[0:10].plot(kind='barh', color='g', alpha=0.5)
ax.set_xlabel('Relative Growth')
ax.set_title('Top 10 Winning Industries for 2020')
plt.gca().invert_yaxis()
fig.show()

fig, ax = plt.subplots(figsize=(10, 12))
avg_growth_by_industry[-10:].plot(kind='barh', color='r', alpha=0.5)
ax.set_xlabel('Relative Growth')
ax.set_title('Top 10 Losing Industries for 2020')
fig.show()
To acquire data for the top 100 companies, we webscrape a Financial Times article. The Financial Times does not allow scraping with Python's requests tool; however, one of us had a subscription and was able to download the page source to a text file, which was used to extract the required data.
with open('FTarticle.txt', 'r') as f:
    article = f.read()
soup = BeautifulSoup(article, "html.parser")
article_div = soup.find('article')
# all headers and sectors are in <h2> tags
headers_sectors = article_div.find_all('h2')
# elements at even (0-based) indices correspond to a company,
# elements at odd indices correspond to a sector
companies = []
sectors = []
for index, elem in enumerate(headers_sectors[:-3]):
    if index % 2 == 0:
        companies.append(elem.text[3:].strip())   # drop the leading rank prefix
    else:
        sectors.append(elem.text.split(' ')[1].capitalize())
companies[-1] = companies[-1][2:]   # the last entry has a longer rank prefix
# numeric data
numbers = article_div.find_all('span', {'class': "n-content-big-number__title"})
# the numeric data alternates like the headers:
# even indices hold the increase in market value,
# odd indices hold the end-2020 market value
share_increase = []
end_2020_value = []
for index, elem in enumerate(numbers):
    if index % 2 == 0:
        share_increase.append(int(elem.text.strip()[:-1]))   # strip trailing '%'
    else:
        text = elem.text.strip()[1:]   # strip the leading currency symbol
        if text[-2:] == 'bn':
            end_2020_value.append(float(text[:-2]))
        if text[-2:] == 'tn':
            end_2020_value.append(float(text[:-2]) * 1000)   # convert tn to bn
The acquired data can be summarised in tabular form as below.
top_100 = pd.DataFrame({ "Company": companies,
"Sector": sectors,
"Market share increase [%]":share_increase,
"End-2020 market value [bn $]": end_2020_value})\
.sort_values(by=['Market share increase [%]'], ascending=False)
top_100
| Company | Sector | Market share increase [%] | End-2020 market value [bn $] | |
|---|---|---|---|---|
| 0 | Tesla | Automotive | 787 | 669.0 |
| 1 | Sea Group | Home | 446 | 102.0 |
| 2 | Zoom Video | Video | 413 | 96.0 |
| 3 | Pinduoduo | Ecommerce | 396 | 218.0 |
| 4 | BYD | Automotive | 359 | 78.0 |
| ... | ... | ... | ... | ... |
| 95 | Kweichow Moutai | Beverages | 80 | 384.0 |
| 96 | Henan Shuanghui Investment & Development | Food | 80 | 25.0 |
| 97 | Marvell Technology | Data | 79 | 32.0 |
| 98 | MediaTek | Semiconductors | 79 | 42.0 |
| 99 | Amazon | Ecommerce | 79 | 1600.0 |
100 rows × 4 columns
COMMENT:
The top companies are Tesla, Sea Group and Zoom Video. We will now aggregate the data per sector.
top_sectors = top_100.pivot_table(index='Sector', aggfunc='mean')\
    .sort_values(by=['Market share increase [%]'], ascending=False)
top_sectors.head()
| End-2020 market value [bn $] | Market share increase [%] | |
|---|---|---|
| Sector | ||
| Home | 102.000000 | 446.0 |
| Video | 96.000000 | 413.0 |
| Automotive | 260.333333 | 411.0 |
| Cyber | 47.000000 | 357.0 |
| Auto | 125.000000 | 271.0 |
fig = px.bar(top_sectors, x=top_sectors.index, y=top_sectors.columns, barmode='group',
title='Top sectors sorted by market share increase')
fig.update_xaxes(rangeslider_visible = True)
fig.update_layout(margin={"r":0,"t":50,"l":0,"b":30})
fig.show()
COMMENT:
It is visible that the sectors with the biggest increase are Home, Video and Automotive. At the same time, the sectors with the biggest end-2020 market value are Ecommerce, Online and Automotive.
From the graphs above, we see that the top 5 industries to thrive during the pandemic were:
while the top 5 to sink were:
As this data is specific to companies' stock prices, there are clashes within industries such as Technology Services, where the rise or fall of a single major company can sway the overall measured performance of the whole industry.
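One way to see this sensitivity: a single extreme mover dominates a sector's mean growth, while the median is far less affected (toy numbers, not from the dataset):

```python
import pandas as pd

# Four hypothetical stocks in one sector; one extreme mover grew ~20x
sector_growth = pd.Series([0.10, 0.05, -0.05, 20.0])

print(sector_growth.mean())    # ~5.0: the mean is dominated by the outlier
print(sector_growth.median())  # ~0.075: the median reflects the typical stock
```

Using the median (or trimming outliers) per sector would be one way to make the industry-level comparison more robust.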
As with any piece of work, there are always improvements that can be made. Below are the main limitations of this analysis and improvements that could increase its accuracy:
Limitations:
Improvements: